
    Initial results of multilevel principal components analysis of facial shape

    Traditionally, active shape models (ASMs) do not make a distinction between groups in the subject population and they rely on methods such as (single-level) principal components analysis (PCA). Multilevel principal components analysis (mPCA) allows one to model between-group effects and within-group effects explicitly. Three-dimensional (3D) laser scans were taken from 250 subjects (38 Croatian female, 35 Croatian male, 40 English female, 40 English male, 23 Welsh female, 27 Welsh male, 23 Finnish female, and 24 Finnish male) and 21 landmark points were subsequently created for each scan. After Procrustes transformation, eigenvalues from mPCA and from single-level PCA based on these points were examined. mPCA indicated that the first two eigenvalues of largest magnitude related to within-group components, but that the next largest eigenvalue related to between-group components. Eigenvalues from single-level PCA always had a larger magnitude than either the within-group or between-group eigenvalues at the equivalent eigenvalue number. An examination of the first mode of variation indicated possible mixing of between-group and within-group effects in single-level PCA. Component scores for mPCA indicated clustering by country and gender for the between-group components (as expected), but not for the within-group terms (also as expected). Clustering of component scores for single-level PCA was harder to resolve. In conclusion, mPCA is a viable method of forming shape models that offers distinct advantages over single-level PCA when groups occur naturally in the subject population.
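    As a rough illustration of the two-level idea described above (not the authors' code), the sketch below computes separate between-group and within-group covariance matrices from Procrustes-aligned landmark vectors and eigen-decomposes each level; the `shapes` and `groups` arrays are assumed inputs.

```python
# Minimal two-level PCA sketch, assuming `shapes` is an (n_subjects, 63) array of
# Procrustes-aligned landmark coordinates (21 landmarks x 3) and `groups` holds one
# group label per subject. Illustrative only.
import numpy as np

def mpca_two_level(shapes, groups):
    grand_mean = shapes.mean(axis=0)
    labels = np.unique(groups)
    group_means = np.array([shapes[groups == g].mean(axis=0) for g in labels])

    # Between-group covariance: spread of the group means around the grand mean.
    dev_b = group_means - grand_mean
    between_cov = dev_b.T @ dev_b / (len(labels) - 1)

    # Within-group covariance: spread of subjects around their own group mean.
    residuals = np.vstack([shapes[groups == g] - shapes[groups == g].mean(axis=0)
                           for g in labels])
    within_cov = residuals.T @ residuals / (len(residuals) - 1)

    # Eigen-decompose each level (eigh returns ascending order; flip to descending).
    eval_b, evec_b = np.linalg.eigh(between_cov)
    eval_w, evec_w = np.linalg.eigh(within_cov)
    ob, ow = np.argsort(eval_b)[::-1], np.argsort(eval_w)[::-1]
    return (eval_b[ob], evec_b[:, ob]), (eval_w[ow], evec_w[:, ow])
```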

    What’s in a Smile? Initial results of multilevel principal components analysis of facial shape and image texture

    Multilevel principal components analysis (mPCA) has previously been shown to provide a simple and straightforward method of forming point distribution models that can be used in (active) shape models. Here we extend the mPCA approach to model image texture as well as shape. As a test case, we consider a set of (2D frontal) facial images from a group of 80 Finnish subjects (34 male; 46 female) with two different facial expressions (smiling and neutral) per subject. Shape (in terms of landmark points) and image texture are considered separately in this initial analysis. Three-level models are constructed that contain levels for biological sex, “within-subject” variation (i.e., facial expression), and “between-subject” variation (i.e., all other sources of variation). By considering eigenvalues, we find that the order of importance as sources of variation for facial shape is: facial expression (47.5%), between-subject variations (45.1%), and then biological sex (7.4%). By contrast, the order for image texture is: between-subject variations (55.5%), facial expression (37.1%), and then biological sex (7.4%). The major modes for the facial-expression level of the mPCA models clearly reflect the increased mouth size and increased prominence of the cheeks during smiling, for both shape and texture. Even subtle effects such as changes to eye and nose shape during smiling are seen clearly. The major mode for the biological-sex level of the mPCA models similarly relates clearly to changes between male and female. Model fits yield “scores” for each principal component that show strong clustering for both shape and texture by biological sex and facial expression at the appropriate levels of the model. We conclude that mPCA correctly decomposes sources of variation due to biological sex and facial expression (etc.) and that it provides a reliable method of forming models of both shape and image texture.
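    The three-level structure can be pictured as splitting each feature vector into a sex effect, a between-subject effect and a within-subject (expression) residual. The sketch below is a hedged illustration of that centring step only; the array names and the simple averaging scheme are assumptions, not the paper's fitting procedure.

```python
# Hypothetical three-level split of shape or texture vectors. X: (n_images, n_features);
# `sex` and `subject` are label arrays, one entry per image. PCA can then be run on each
# stacked level separately; the ratio of summed eigenvalues per level gives the kind of
# "importance" percentages quoted in the abstract.
import numpy as np

def three_level_split(X, sex, subject):
    grand = X.mean(axis=0)
    sex_mean = {s: X[sex == s].mean(axis=0) for s in np.unique(sex)}
    subj_mean = {p: X[subject == p].mean(axis=0) for p in np.unique(subject)}

    level_sex, level_subj, level_expr = [], [], []
    for i in range(len(X)):
        level_sex.append(sex_mean[sex[i]] - grand)                    # biological sex effect
        level_subj.append(subj_mean[subject[i]] - sex_mean[sex[i]])   # between-subject effect
        level_expr.append(X[i] - subj_mean[subject[i]])               # within-subject (expression)
    return np.array(level_sex), np.array(level_subj), np.array(level_expr)
```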

    MOON: A Mixed Objective Optimization Network for the Recognition of Facial Attributes

    Attribute recognition, particularly facial, extracts many labels for each image. While some multi-task vision problems can be decomposed into separate tasks and stages, e.g., training independent models for each task, for a growing set of problems joint optimization across all tasks has been shown to improve performance. We show that for deep convolutional neural network (DCNN) facial attribute extraction, multi-task optimization is better. Unfortunately, it can be difficult to apply joint optimization to DCNNs when training data is imbalanced, and re-balancing multi-label data directly is structurally infeasible, since adding/removing data to balance one label will change the sampling of the other labels. This paper addresses the multi-label imbalance problem by introducing a novel mixed objective optimization network (MOON) with a loss function that mixes multiple task objectives with domain-adaptive re-weighting of propagated loss. Experiments demonstrate that not only does MOON advance the state of the art in facial attribute recognition, but it also outperforms independently trained DCNNs using the same data. When using facial attributes for the LFW face recognition task, we show that our balanced (domain-adapted) network outperforms the unbalanced trained network.
    Comment: Post-print of manuscript accepted to the European Conference on Computer Vision (ECCV) 2016. http://link.springer.com/chapter/10.1007%2F978-3-319-46454-1_
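    The core re-weighting idea can be illustrated with a small, hypothetical loss function: each attribute's positive and negative terms are scaled so that the imbalanced source label distribution is pushed toward a chosen target distribution. This is only a sketch of domain-adaptive re-weighting with a binary cross-entropy surrogate, not MOON's exact mixed objective.

```python
# Illustrative sketch, not MOON's formulation. logits, targets: (batch, n_attrs) tensors;
# source_pos_rate, target_pos_rate: (n_attrs,) tensors of per-attribute positive rates in (0, 1).
import torch
import torch.nn.functional as F

def reweighted_multilabel_loss(logits, targets, source_pos_rate, target_pos_rate):
    # Up/down-weight positives and negatives per attribute so their expected contribution
    # matches the target rate rather than the imbalanced source rate.
    w_pos = target_pos_rate / source_pos_rate
    w_neg = (1.0 - target_pos_rate) / (1.0 - source_pos_rate)
    weights = targets * w_pos + (1.0 - targets) * w_neg
    per_label = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (weights * per_label).mean()
```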

    Deep Adaptive Attention for Joint Facial Action Unit Detection and Face Alignment

    Facial action unit (AU) detection and face alignment are two highly correlated tasks, since facial landmarks can provide precise AU locations to facilitate the extraction of meaningful local features for AU detection. Most existing AU detection works treat face alignment as a preprocessing step and handle the two tasks independently. In this paper, we propose a novel end-to-end deep learning framework for joint AU detection and face alignment, which has not been explored before. In particular, multi-scale shared features are learned first, and high-level features from face alignment are fed into AU detection. Moreover, to extract precise local features, we propose an adaptive attention learning module that refines the attention map of each AU adaptively. Finally, the assembled local features are integrated with face alignment features and global features for AU detection. Experiments on the BP4D and DISFA benchmarks demonstrate that our framework significantly outperforms the state-of-the-art methods for AU detection.
    Comment: This paper has been accepted by ECCV 201
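    A minimal sketch of the joint set-up, under assumed module sizes: a shared backbone feeds both a landmark-regression head and an AU-detection head, and their losses are summed. The adaptive attention module and multi-scale feature sharing of the actual framework are omitted here.

```python
# Hypothetical joint model; layer sizes, head designs and `lambda_au` are assumptions.
import torch
import torch.nn as nn

class JointAUAlignment(nn.Module):
    def __init__(self, n_landmarks=68, n_aus=12, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 16, feat_dim), nn.ReLU())
        self.align_head = nn.Linear(feat_dim, n_landmarks * 2)   # (x, y) per landmark
        self.au_head = nn.Linear(feat_dim, n_aus)                # one logit per AU

    def forward(self, images):
        feats = self.backbone(images)
        return self.align_head(feats), self.au_head(feats)

def joint_loss(landmark_pred, landmark_gt, au_logits, au_gt, lambda_au=1.0):
    align = nn.functional.mse_loss(landmark_pred, landmark_gt)
    au = nn.functional.binary_cross_entropy_with_logits(au_logits, au_gt)
    return align + lambda_au * au
```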

    A 3D Face Modelling Approach for Pose-Invariant Face Recognition in a Human-Robot Environment

    Face analysis techniques have become a crucial component of human-machine interaction in the fields of assistive and humanoid robotics. However, the variations in head pose that arise naturally in these environments are still a great challenge. In this paper, we present a real-time-capable 3D face modelling framework for 2D in-the-wild images that is applicable to robotics. The fitting of the 3D Morphable Model is based exclusively on automatically detected landmarks. After fitting, the face can be corrected in pose and transformed back to a frontal 2D representation that is more suitable for face recognition. We conduct face recognition experiments with non-frontal images from the MUCT database and uncontrolled, in-the-wild images from the PaSC database, the most challenging face recognition database to date, and show improved performance. Finally, we present our SCITOS G5 robot system, which incorporates our framework as a means of image pre-processing for face analysis.
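    The pose-correction step can be illustrated with standard PnP: given detected 2D landmarks and corresponding 3D model points, recover the head rotation, which can then drive frontalization. The camera intrinsics below are rough assumptions, and this is a generic sketch rather than the authors' 3D Morphable Model fitting.

```python
# Generic head-pose estimation from landmark correspondences (assumed inputs).
# model_points_3d: (N, 3), image_points_2d: (N, 2), N >= 4; image_size: (height, width).
import numpy as np
import cv2

def estimate_head_pose(model_points_3d, image_points_2d, image_size):
    h, w = image_size
    focal = float(w)  # rough assumption: focal length of about one image width
    camera_matrix = np.array([[focal, 0, w / 2.0],
                              [0, focal, h / 2.0],
                              [0, 0, 1.0]])
    dist_coeffs = np.zeros(4)  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(model_points_3d.astype(np.float64),
                                  image_points_2d.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation describing head pose
    return ok, rotation, tvec
```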

    Endoscopic navigation in the absence of CT imaging

    Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference image to provide structural context to the clinician. In this paper, we present a system for navigation during clinical endoscopic exploration in the absence of computed tomography (CT) scans by making use of shape statistics from past CT scans. Using a deformable registration algorithm along with dense reconstructions from video, we show that we are able to achieve submillimeter registrations on in-vivo clinical data and are able to assign confidence to these registrations using confidence criteria established on simulated data.
    Comment: 8 pages, 3 figures, MICCAI 201
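    For orientation only, the registration loop can be sketched as plain rigid ICP between a video-derived point cloud and points sampled from a reference shape; the paper's method is deformable and statistics-driven, so this shows just the iterate-correspond-align skeleton.

```python
# Plain rigid ICP sketch (assumed point-cloud inputs), not the paper's deformable registration.
import numpy as np
from scipy.spatial import cKDTree

def rigid_icp(source, target, n_iters=50):
    """source: (N, 3), target: (M, 3); returns the source cloud aligned to the target."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(n_iters):
        _, idx = tree.query(src)                  # closest target point per source point
        matched = target[idx]
        # Kabsch step: best rotation/translation mapping src onto its matches.
        src_c, tgt_c = src - src.mean(0), matched - matched.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:                  # avoid reflections
            Vt[-1] *= -1
            R = (U @ Vt).T
        t = matched.mean(0) - src.mean(0) @ R.T
        src = src @ R.T + t
    return src
```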

    Human motion segmentation using active shape models

    Human motion analysis from images is closely related to the development of computational techniques capable of automatically identifying, tracking and analyzing relevant structures of the body. This work explores the identification of such structures in images, which is the first step of any computational system designed to analyze human motion. A widely used database (the CASIA Gait Database) was used to build a Point Distribution Model (PDM) of the structure of the human body. The training dataset was composed of 14 subjects walking in four directions, and each shape was represented by a set of 113 labelled landmark points. These points comprised 100 contour points automatically extracted from the silhouette, combined with an additional 13 anatomical points on the elbows, knees and feet that were manually annotated. The PDM was later used in the construction of an Active Shape Model, which combines the shape model with gray-level profiles, in order to segment the modelled human body in new images. The experiments with this segmentation technique revealed very encouraging results, as it was able to gather the necessary data from subjects walking in different directions using just one segmentation model.
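    A minimal sketch of the Point Distribution Model step, assuming the landmark sets have already been Procrustes-aligned: PCA of the aligned shape vectors gives a mean shape and modes of variation, and new plausible shapes are generated by clamping each mode weight to about three standard deviations.

```python
# PDM construction and shape generation sketch; `aligned_shapes` is an assumed
# (n_samples, 2 * n_points) array of Procrustes-aligned landmark coordinates.
import numpy as np

def build_pdm(aligned_shapes, n_modes=10):
    mean_shape = aligned_shapes.mean(axis=0)
    cov = np.cov(aligned_shapes - mean_shape, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_modes]       # keep the largest modes
    return mean_shape, eigvals[order], eigvecs[:, order]

def generate_shape(mean_shape, eigvals, eigvecs, b):
    """b: (n_modes,) mode weights, clamped so the generated shape stays plausible."""
    limit = 3.0 * np.sqrt(eigvals)
    b = np.clip(b, -limit, limit)
    return mean_shape + eigvecs @ b                   # x = x_bar + Phi * b
```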

    Structured Landmark Detection via Topology-Adapting Deep Graph Learning

    Image landmark detection aims to automatically identify the locations of predefined fiducial points. Despite recent success in this field, higher-order structural modeling that captures implicit or explicit relationships among anatomical landmarks has not been adequately exploited. In this work, we present a new topology-adapting deep graph learning approach for accurate anatomical facial and medical (e.g., hand, pelvis) landmark detection. The proposed method constructs graph signals leveraging both local image features and global shape features. The adaptive graph topology naturally explores and lands on task-specific structures, which are learned end-to-end with two Graph Convolutional Networks (GCNs). Extensive experiments are conducted on three public facial image datasets (WFLW, 300W, and COFW-68) as well as three real-world X-ray medical datasets (Cephalometric (public), Hand and Pelvis). Quantitative comparisons with previous state-of-the-art approaches across all studied datasets indicate superior performance in both robustness and accuracy. Qualitative visualizations of the learned graph topologies demonstrate a physically plausible connectivity lying behind the landmarks.
    Comment: Accepted to ECCV-20. Camera-ready with supplementary material
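    A generic graph-convolution update over landmark nodes, for illustration only (the paper's topology-adapting GCNs are more involved): each landmark feature is mixed with its neighbours through a normalised adjacency matrix.

```python
# Standard GCN layer sketch: H' = ReLU(A_hat H W), with A_hat the degree-normalised
# adjacency including self-loops. Node features and adjacency are assumed inputs.
import torch
import torch.nn as nn

class LandmarkGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adjacency):
        """node_feats: (n_landmarks, in_dim); adjacency: (n_landmarks, n_landmarks)."""
        a_hat = adjacency + torch.eye(adjacency.size(0))      # add self-loops
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = deg_inv_sqrt[:, None] * a_hat * deg_inv_sqrt[None, :]
        return torch.relu(a_norm @ self.linear(node_feats))
```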

    Extended Supervised Descent Method for Robust Face Alignment

    Supervised Descent Method (SDM) is a highly efficient and accurate approach for facial landmark locating/face alignment. It learns a sequence of descent directions that minimize the difference between the estimated shape and the ground truth in HOG feature space during training, and utilizes them at test time to predict the shape increment iteratively. In this paper, we propose to modify SDM in three respects: 1) multi-scale HOG features are applied sequentially as a coarse-to-fine feature detector; 2) global-to-local constraints on the facial features are considered sequentially in the regression cascade; 3) rigid regularization is applied to obtain more stable prediction results. Extensive experimental results demonstrate that each of the three modifications improves the accuracy and robustness of the traditional SDM method. Furthermore, enhanced by these three-fold improvements, the extended SDM compares favorably with other state-of-the-art methods on several challenging face datasets, including LFPW, HELEN and 300 Faces in-the-Wild.
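    One SDM stage can be sketched as ridge regression from features extracted at the current shape estimate to the remaining shape increment; the regularisation constant and bias handling below are assumptions, and the paper's rigid regularization is only loosely echoed by the ridge term.

```python
# Single SDM stage sketch. features: (n_samples, d) HOG descriptors at the current shapes;
# shape_residuals: (n_samples, 2p) ground-truth shape minus current shape estimate.
import numpy as np

def train_sdm_stage(features, shape_residuals, reg=1e-3):
    X = np.hstack([features, np.ones((features.shape[0], 1))])   # append bias column
    A = X.T @ X + reg * np.eye(X.shape[1])                       # ridge-regularised normal equations
    W = np.linalg.solve(A, X.T @ shape_residuals)
    return W

def apply_sdm_stage(feature_vec, W, current_shape):
    x = np.append(feature_vec, 1.0)
    return current_shape + x @ W                                 # x_{k+1} = x_k + R * phi(x_k) + b
```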